A group of high-profile figures, including Elon Musk and Steve Wozniak, has signed an open letter calling for a pause in the development of advanced artificial intelligence. The letter, posted on the website of the Future of Life Institute, says that the training of AI systems more powerful than GPT-4 should be paused for at least six months.
What are the concerns raised here, and what have these individuals proposed, even as artificial intelligence continues to develop into more capable and efficient models? Tools such as ChatGPT have been heralded as a new stage of technological development, one that will change how people seek out information.
The technical capabilities of ChatGPT have received almost universal appreciation, even though there is no consensus on whether these claims will hold up over time or whether the technology is all that it is promised to be right now. It has reportedly been able to clear the US bar exam that aspiring lawyers must pass. The letter was posted on the website of the Future of Life Institute (FLI), which describes its work as grantmaking, policy research, and advocacy. Signed by researchers, tech CEOs, and other notable figures, the letter states that these systems now show human-competitive intelligence.
The letter argues that this could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. The signatories claim that this level of planning and management is not happening; instead, an "out-of-control race" is under way to develop new "digital minds" that not even their creators can understand. The letter asks whether we should let machines flood our information channels with propaganda and untruth, automate away all the jobs, and develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us. It calls for a six-month pause on training systems more powerful than GPT-4, with all key actors included in the pause.
If such a pause cannot be enacted quickly, the signatories say, governments should step in and institute a moratorium. They argue that AI companies need to develop a set of shared safety protocols that can be audited and overseen by independent outside experts. They also propose a regulatory framework with legal structures and safeguards, including watermarking systems to help distinguish real content from synthetic, and robust public funding for technical AI safety research.
OpenAI did not immediately respond to the letter. In the past, the company has itself used cautionary language about the impact of artificial intelligence, saying that as it creates successively more powerful systems, it wants to deploy them and gain experience operating them in the real world, and that a gradual transition to a world with AGI is the best way to steward it. It has also said that the rate of progress in the world is expected to be much faster with the help of powerful artificial intelligence. Musk was one of the initial funders of OpenAI in 2015, but he later stepped away from the company, citing a potential conflict of interest as Tesla was also working on artificial intelligence.